Adil Baig
Abstract
Video Quality Assessment (VQA) is a critical component of many technologies, from automated video broadcasting to display systems. Determining visual quality requires a balanced examination of visual features and functionality. Previous research has shown that features derived from pre-trained Convolutional Neural Networks (CNNs) are highly effective in a wide range of image analysis and computer vision tasks. In this work, we devise a novel architecture for No-Reference Video Quality Assessment (NR-VQA) built on features extracted from pre-trained deep neural networks, transfer learning, temporal pooling, and regression. We obtain our results using only temporally pooled deep features, avoiding manually crafted features altogether. This study describes a deep learning-based approach to NR-VQA that applies several pre-trained deep neural networks in parallel to characterize possible image and video distortions. A set of pre-trained CNNs extracts spatially pooled, frame-level feature representations, which are temporally pooled into video-level features and then individually mapped onto subjective quality scores. Finally, the perceived quality of a video sequence is computed by fusing the quality scores produced by the individual regressors. Extensive experiments demonstrate that the proposed approach sets a new state of the art on two large benchmark video quality assessment databases with authentic distortions. Furthermore, the results show that fusing the decisions of multiple deep networks can significantly improve NR-VQA.
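The pipeline described above lends itself to a short illustration. The following is a minimal sketch under stated assumptions, not the authors' implementation: it assumes ResNet-50 and DenseNet-121 as the pre-trained backbones, mean/standard-deviation statistics over frames for temporal pooling, and a support vector regressor per network, with the final quality score obtained by averaging the per-network predictions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
from sklearn.svm import SVR

# Illustrative backbone choices; the paper uses multiple pre-trained CNNs,
# but this exact selection is an assumption for the sketch.
backbones = {
    "resnet50": models.resnet50(weights=models.ResNet50_Weights.DEFAULT),
    "densenet121": models.densenet121(weights=models.DenseNet121_Weights.DEFAULT),
}

def frame_features(model, frames):
    """Spatially pooled, frame-level deep features.

    frames: (T, 3, H, W) tensor, preprocessed for ImageNet-trained models.
    Returns a (T, D) tensor, one feature vector per frame.
    """
    extractor = nn.Sequential(*list(model.children())[:-1])  # drop the classifier head
    extractor.eval()
    with torch.no_grad():
        feats = extractor(frames)                # (T, D, h, w) feature maps
        feats = F.adaptive_avg_pool2d(feats, 1)  # global spatial average pooling
    return torch.flatten(feats, 1)

def video_feature(frame_feats):
    """Temporal pooling: per-dimension mean and std over frames, concatenated."""
    return torch.cat([frame_feats.mean(dim=0), frame_feats.std(dim=0)])

# One regressor per backbone; each must first be fit on (video feature, MOS)
# pairs from a training set before prediction.
regressors = {name: SVR() for name in backbones}

def predict_quality(frames):
    """Fuse the per-network quality predictions by simple averaging."""
    scores = []
    for name, model in backbones.items():
        v = video_feature(frame_features(model, frames))
        scores.append(regressors[name].predict(v.numpy().reshape(1, -1))[0])
    return sum(scores) / len(scores)
```

Averaging is only one possible fusion rule; a weighted combination or a learned meta-regressor over the per-network scores would fit the same architecture.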